Results

As described above, learning data for all participants were split into the first 6 and the last 6 stimulus iterations before model fitting. Additionally, the two conditions, set-size 3 and set-size 6, were also fit separately.
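The split described above can be sketched as follows. This is a minimal illustration, not the analysis code: the trial structure and field names (`stimulus`, `iteration`) are assumptions, with each stimulus presented 12 times so that the halves are the first 6 and last 6 presentations per stimulus.

```python
def split_halves(trials):
    """Split each stimulus's presentations into the first 6 and last 6.

    `trials` is a list of dicts with 'stimulus' and 'iteration' keys,
    where 'iteration' counts presentations of that stimulus from 1 to 12.
    Field names and structure are illustrative assumptions.
    """
    first, second = [], []
    for t in trials:
        (first if t["iteration"] <= 6 else second).append(t)
    return first, second

# Example: two stimuli, 12 presentations each
trials = [{"stimulus": s, "iteration": i}
          for s in ("A", "B") for i in range(1, 13)]
first, second = split_halves(trials)
```

Splitting per stimulus iteration (rather than per block) keeps the number of exposures to each stimulus balanced across the two halves.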

Overview of model-fitting results

Figure 1


We found that the LTM model still fit more subjects in both halves (first half: LTM M = 66.5, RL M = 16.5; second half: LTM M = 61.5, RL M = 21.5) and in both conditions (set-size 3: LTM M = 67, RL M = 16; set-size 6: LTM M = 61, RL M = 22), much like the results obtained through the model-fitting procedure in Experiment 1 (Figure 1). Furthermore, more subjects fit the LTM model in the set-size 3 condition than in the set-size 6 condition (more so in the first half than in the second, for both); conversely, more subjects fit the RL model in set-size 6 than in set-size 3 (more so in the second half than in the first, for both conditions). This trend aligns more closely with Collins's (2018) findings, but these results do not take individual dynamics into account (covered in detail below).

Overview of model-fitting quality

The plot below shows the mean BIC value for the best-fitting model in each half and set-size condition. The table that follows lists, for an example subject, the best-fitting model and its parameter values in each half and set-size; parameters not used by a given model appear as NaN.

#> # A tibble: 20 × 8
#>    subjects name     mod.id  model index condition  parameter param_vals
#>       <int> <chr>    <chr>   <chr> <int> <chr>      <chr>          <dbl>
#>  1     6209 half1_N3 LTM_105 LTM     105 set3_learn alpha        NaN    
#>  2     6209 half1_N3 LTM_105 LTM     105 set3_learn egs          NaN    
#>  3     6209 half1_N3 LTM_105 LTM     105 set3_learn bll            1.41 
#>  4     6209 half1_N3 LTM_105 LTM     105 set3_learn imag          -1.41 
#>  5     6209 half1_N3 LTM_105 LTM     105 set3_learn ans            1.41 
#>  6     6209 half1_N6 LTM_79  LTM      79 set6_learn alpha        NaN    
#>  7     6209 half1_N6 LTM_79  LTM      79 set6_learn egs          NaN    
#>  8     6209 half1_N6 LTM_79  LTM      79 set6_learn bll            0.704
#>  9     6209 half1_N6 LTM_79  LTM      79 set6_learn imag          -1.41 
#> 10     6209 half1_N6 LTM_79  LTM      79 set6_learn ans            0.704
#> 11     6209 half2_N3 LTM_101 LTM     101 set3_learn alpha        NaN    
#> 12     6209 half2_N3 LTM_101 LTM     101 set3_learn egs          NaN    
#> 13     6209 half2_N3 LTM_101 LTM     101 set3_learn bll            1.41 
#> 14     6209 half2_N3 LTM_101 LTM     101 set3_learn imag          -1.41 
#> 15     6209 half2_N3 LTM_101 LTM     101 set3_learn ans           -1.41 
#> 16     6209 half2_N6 RL_24   RL       24 set6_learn alpha          1.39 
#> 17     6209 half2_N6 RL_24   RL       24 set6_learn egs            0.693
#> 18     6209 half2_N6 RL_24   RL       24 set6_learn bll          NaN    
#> 19     6209 half2_N6 RL_24   RL       24 set6_learn imag         NaN    
#> 20     6209 half2_N6 RL_24   RL       24 set6_learn ans          NaN
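Model comparison here rests on the standard BIC recipe, BIC = k·ln(n) − 2·ln(L̂), with the lower value winning. A minimal sketch follows; the parameter counts are inferred from the table above (LTM: bll, imag, ans; RL: alpha, egs), but the likelihood values and observation count are made-up illustrations, not fitted results.

```python
import math

def bic(log_lik, n_params, n_obs):
    """Bayesian information criterion: k*ln(n) - 2*ln(L); lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_lik

def best_model(fits, n_obs):
    """Pick the model with the lowest BIC.

    `fits` maps model name -> (max log-likelihood, free-parameter count).
    The numbers used below are illustrative, not the experiment's fits.
    """
    return min(fits, key=lambda m: bic(fits[m][0], fits[m][1], n_obs))

fits = {"LTM": (-95.2, 3),   # bll, imag, ans
        "RL":  (-101.7, 2)}  # alpha, egs
winner = best_model(fits, n_obs=90)
```

With these illustrative numbers the LTM model's higher likelihood outweighs the penalty for its extra parameter, so it wins the comparison.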

Dynamics

Does the best-fit model change from the first half to the second half, and between the two set-sizes?

Next, we sought to track learning dynamics for each individual learner. In other words, we wanted to see whether learners changed strategies in response to: 1) their learning experience, by comparing model fits for the first and second halves of learning; 2) task demands, by comparing model fits for the two set-size conditions; and 3) interactions between the two.

We found that over 81% of learners who were best fit by the LTM model in the first half were also best fit by it in the second half of learning (set-size 3: n = 12; set-size 6: n = 12). In contrast, more than 50% of subjects who were best fit by the RL model in the first half also fit the RL model in the second half (set-size 3: n = 6; set-size 6: n = 8); the rest shifted to LTM.

Patterns of model fits across the two set-sizes were similar to the first-half/second-half fits above. More than 75% of subjects who were best fit by the LTM model in the set-size 3 trials also fit the LTM model in set-size 6, and these numbers were largely the same across the first and second halves of the task (half 1: n = 54; half 2: n = 49). In contrast, fewer of the subjects who fit the RL model in the set-size 3 blocks also fit RL in the set-size 6 blocks, and these numbers differed between the halves (half 1: n = 4; half 2: n = 9).
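Persistence figures like the ones above reduce to a simple transition count over per-subject best-model labels. A sketch, assuming each subject has one best-fitting label per half (the labels below are made up, not the experiment's fits):

```python
from collections import Counter

def transition_counts(labels_h1, labels_h2):
    """Count best-model transitions from half 1 to half 2.

    Each argument maps subject id -> best-fitting model ('LTM' or 'RL').
    Returns a Counter keyed by (half-1 model, half-2 model).
    """
    return Counter((labels_h1[s], labels_h2[s]) for s in labels_h1)

# Illustrative labels only
h1 = {1: "LTM", 2: "LTM", 3: "RL", 4: "RL", 5: "LTM"}
h2 = {1: "LTM", 2: "LTM", 3: "LTM", 4: "RL", 5: "LTM"}
trans = transition_counts(h1, h2)

# Proportion of half-1 LTM fitters who stayed LTM; a missing cell in the
# Counter simply reads as zero.
ltm_stay = trans["LTM", "LTM"] / (trans["LTM", "LTM"] + trans["LTM", "RL"])
```

The same counting applies unchanged to the set-size 3 vs. set-size 6 comparison by swapping in per-condition labels.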

How do the groups in Experiment 1 fare in Experiment 2?

Let us assume that there are high-RL learners (most likely to fit RL), high-LTM learners (most likely to fit LTM), and those in the middle who are likely to fit the combination models. If we fit the set-size 3 and set-size 6 parts separately, how would these three groups behave? What would we learn about their metacognition? We expect that people at the extremes would use the same strategy for both set-sizes, while people in the middle would respond more to task demands, using LTM for set-size 3 and RL for set-size 6, as Collins (2018) predicts.
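The switcher/non-switcher grouping used in the next analysis follows directly from the per-condition fits; a one-line sketch (function name and labels are illustrative):

```python
def classify(set3_model, set6_model):
    """Label a subject a 'switcher' if the best-fitting model differs
    between the set-size 3 and set-size 6 conditions."""
    return "switcher" if set3_model != set6_model else "non-switcher"

# The pattern predicted for a mid-range learner responding to task
# demands: LTM under low load, RL under high load.
assert classify("LTM", "RL") == "switcher"
assert classify("LTM", "LTM") == "non-switcher"
```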

Group differences in performance (accuracy) for switchers vs non-switchers

#> Analysis of Variance Table
#> 
#> Response: mean.acc
#>                                  Df Sum Sq Mean Sq  F value    Pr(>F)    
#> stable                            1 0.0766 0.07664   5.8039  0.016276 *  
#> half                              1 0.0205 0.02052   1.5544  0.212947    
#> setSize_model                     3 1.2333 0.41111  31.1348 < 2.2e-16 ***
#> phase                             1 2.2786 2.27861 172.5672 < 2.2e-16 ***
#> stable:half                       1 0.1639 0.16391  12.4137  0.000457 ***
#> stable:setSize_model              3 0.1267 0.04225   3.1995  0.022974 *  
#> half:setSize_model                3 0.0314 0.01048   0.7934  0.497834    
#> stable:phase                      1 0.0729 0.07286   5.5179  0.019130 *  
#> half:phase                        1 0.0447 0.04469   3.3843  0.066288 .  
#> setSize_model:phase               3 0.6658 0.22192  16.8069 1.628e-10 ***
#> stable:half:setSize_model         3 0.0718 0.02394   1.8128  0.143535    
#> stable:half:phase                 1 0.0141 0.01406   1.0652  0.302436    
#> stable:setSize_model:phase        3 0.0084 0.00280   0.2124  0.887841    
#> half:setSize_model:phase          3 0.0069 0.00230   0.1740  0.914010    
#> stable:half:setSize_model:phase   3 0.0143 0.00475   0.3599  0.781961    
#> Residuals                       632 8.3450 0.01320                       
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1


All subjects who were best fit by RL in set-size 6 but by LTM in set-size 3 had lower learning accuracy in set-size 6, yet their decay during test was minimal, and their overall test accuracy was the same for set-sizes 3 and 6. Learning in set-size 3 was successful and high for both LTM and RL fitters, and comparable to set-size 6 learning for LTM fitters. Subjects who fit the RL model in set-size 3 had high accuracy at both learning and test, and they also showed high accuracy with minimal decay in set-size 6, even though the LTM model fit them best there.

For subjects who did not fit different models across the two set-sizes, the LTM group showed more decay during test than the RL group, whose decay was minimal on average.

| term                        |  df | sumsq     | meansq    | statistic  | p.value   |
|-----------------------------|----:|----------:|----------:|-----------:|----------:|
| stable                      |   1 | 0.0185432 | 0.0185432 |  3.1381286 | 0.0774453 |
| half                        |   1 | 0.0023208 | 0.0023208 |  0.3927518 | 0.5313090 |
| condition                   |   1 | 0.2555172 | 0.2555172 | 43.2421367 | 0.0000000 |
| model                       |   1 | 0.5411171 | 0.5411171 | 91.5752982 | 0.0000000 |
| stable:half                 |   1 | 0.0514896 | 0.0514896 |  8.7137825 | 0.0033946 |
| stable:condition            |   1 | 0.0089605 | 0.0089605 |  1.5164268 | 0.2190767 |
| half:condition              |   1 | 0.0012936 | 0.0012936 |  0.2189290 | 0.6401797 |
| stable:model                |   1 | 0.0650626 | 0.0650626 | 11.0107824 | 0.0010116 |
| half:model                  |   1 | 0.0075093 | 0.0075093 |  1.2708339 | 0.2604656 |
| condition:model             |   1 | 0.2976787 | 0.2976787 | 50.3772889 | 0.0000000 |
| stable:half:condition       |   1 | 0.0178245 | 0.0178245 |  3.0165133 | 0.0833950 |
| stable:half:model           |   1 | 0.0087152 | 0.0087152 |  1.4749077 | 0.2254797 |
| stable:condition:model      |   1 | 0.0763722 | 0.0763722 | 12.9247560 | 0.0003761 |
| half:condition:model        |   1 | 0.0000000 | 0.0000000 |  0.0000015 | 0.9990233 |
| stable:half:condition:model |   1 | 0.0224305 | 0.0224305 |  3.7960057 | 0.0522600 |
| Residuals                   | 316 | 1.8672395 | 0.0059090 |         NA |        NA |

Are the parameter values largely different for each half and set-size?

| alpha | egs | bll | imag | ans |
|------:|----:|----:|-----:|----:|
|  0.15 | 0.3 | 0.5 |  0.3 | 0.3 |

Individual plots